Integrating your Chatbot into a Web Interface


Introduction

In this lab, you will learn to set up a back-end server and integrate your chatbot into a web application.

Learning objectives

After completing this lab, you will be able to:

  • Set up your back-end server
  • Integrate your chatbot into your Flask server
  • Communicate with the back-end using a web page

Prerequisites

This section assumes you know how to build the simple terminal chatbot explained in the first lab.

There are two things you must build to create your ChatGPT-like website:

  1. A back-end server that hosts your chatbot
  2. A front-end webpage that communicates with your back-end server

Without further ado, let's get started!

Step 1: Hosting your chatbot on a backend server

What is a backend server?

A backend server is like the brain behind a website or application. In this case, the backend server will receive prompts from your website, feed them into your chatbot, and return the output of the chatbot back to the website, which will be read by the user.
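Conceptually, the backend's job can be sketched as a single function. The names `chatbot` and `handle_request` below are hypothetical, used only to illustrate the flow; the real implementation comes later in this lab:

```python
def chatbot(prompt: str) -> str:
    # Stand-in for the real model you'll integrate later in this lab
    return f"You said: {prompt}"

def handle_request(prompt_from_website: str) -> str:
    # The backend receives the prompt, feeds it to the chatbot,
    # and returns the chatbot's output to the website
    reply = chatbot(prompt_from_website)
    return reply

print(handle_request("Hello!"))  # You said: Hello!
```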

Hosting a simple backend server using Flask

Note: Consider using a requirements.txt file to keep track of the packages you install in this lab.

Flask is a Python framework for building web applications. It provides a set of tools and functionalities to handle incoming requests, process data, and generate responses, making it easy to power your website or application.

Prerequisites

For all terminal interactions in this lab (such as running python files or installing packages), you will use the built-in terminal that comes with Cloud IDE. You may launch the terminal by either:

  • Pressing Ctrl + ` , or
  • By selecting Terminal –> New Terminal from the toolbar at the top of the IDE window on the right.

In your terminal, let's install the following requisites:

```shell
python3.11 -m pip install flask
python3.11 -m pip install flask_cors
```

Setting up the server

Next, you will create a script that stores your flask server code.

To create a new Python file, Click on File Explorer, then right-click in the explorer area and select New File. Name this new file app.py.

Let's take a look at how to implement a simple flask server:

```python
from flask import Flask

app = Flask(__name__)

@app.route('/')
def home():
    return 'Hello, World!'

if __name__ == '__main__':
    app.run()
```

Paste the above code in the app.py file you just created and save it.

In this code:

  • You import the Flask class from the flask module.
  • You create an instance of the Flask class and assign it to the variable app.
  • You define a route for the homepage by decorating the home() function with the @app.route() decorator. The function returns the string 'Hello, World!'. This means that when the user visits the URL where the website is hosted, the backend server will receive the request and return 'Hello, World!' to the user.
  • The if __name__ == '__main__': condition ensures that the server is only run if the script is executed directly, not when imported as a module.
  • Finally, you call app.run() to start the server.
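As an optional aside, you can sanity-check the route before starting a real server: Flask ships with a built-in test client that calls routes in-process. This sketch assumes flask is already installed, as done earlier in this lab:

```python
from flask import Flask

app = Flask(__name__)

@app.route('/')
def home():
    return 'Hello, World!'

# Call the route in-process, without starting a server
client = app.test_client()
response = client.get('/')
print(response.get_data(as_text=True))  # Hello, World!
```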
```shell
python3.11 app.py
```

Run the server by executing the above command in your terminal. By default, Flask hosts the server at http://127.0.0.1:5000/ (which is equivalent to http://localhost:5000/).

With this command, the Flask server will start running. If you run this server on your local machine, then you can access it by visiting http://127.0.0.1:5000/ or http://localhost:5000/ in your web browser.

However, you are currently running this lab in the Skills Network Cloud. Thus, you can access your server as follows:

  1. Navigate to the Skills Network Toolbox from the toolbar on the left side of the IDE
  2. Click "Launch Application" in the adjacent vertical sidebar
  3. Enter 5000 as your Application Port
  4. Launch the application in a new browser tab

By performing the above steps, you have visited the relative localhost URL of the cloud server.

IMPORTANT: Throughout the rest of this lab, you will refer to this URL as <HOST>.

On visiting the localhost, you should see the "Hello, World!" message displayed.

Here's what it should look like:


Let's add the following routes to try it out:

```python
@app.route('/bananas')
def bananas():
    return 'This page has bananas!'

@app.route('/bread')
def bread():
    return 'This page has bread!'
```

Now, let's stop the app using Ctrl + C in the terminal and re-run it with python3.11 app.py. Then, let's visit both of these routes at http://<HOST>/bananas and http://<HOST>/bread. Here's what you should see:


Okay - now that you've seen how routes work, you can remove these two routes (bananas and bread) from your app.py, as you won't be using them.

Before proceeding, you'll also add two more lines of code to your program to mitigate CORS errors - a type of error that occurs when a web page makes requests to a domain other than the one that served it.

You'll be modifying your code as follows:

```python
from flask import Flask
from flask_cors import CORS  # newly added

app = Flask(__name__)
CORS(app)  # newly added

@app.route('/')
def home():
    return 'Hello, World!'

if __name__ == '__main__':
    app.run()
```

Integrating your chatbot into your Flask server

Now that you have your Flask server set up, let's integrate your chatbot into your Flask server.

As stated at the beginning, this lab assumes you've completed the first lab of this guided project on how to create your own simple chatbot.

First, install the required packages:

```shell
python3.11 -m pip install transformers torch
```

Next, let’s copy the code to initialize your chatbot from lab 1 and place it at the top of your script. You also must import the necessary libraries for your chatbot.

```python
from transformers import AutoModelForSeq2SeqLM
from transformers import AutoTokenizer
```

```python
model_name = "facebook/blenderbot-400M-distill"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
conversation_history = []
```

Next, you'll need to import a couple more modules to read the data.

```python
from flask import request
import json
```

Before implementing the actual function though, you need to determine the structure you expect to receive in the incoming HTTP request.

Let's define your expected structure as follows:

```json
{
  "prompt": "message"
}
```
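On the server side, a request body matching this structure arrives as a raw JSON string, which the standard library's json module parses into a dictionary. For example:

```python
import json

# A raw HTTP request body matching the expected structure
raw_body = '{"prompt": "Hello, how are you today?"}'

data = json.loads(raw_body)   # parse the JSON string into a dict
print(data['prompt'])         # Hello, how are you today?
```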

Now implement your chatbot function. Again, you'll copy code over from your chatbot implementation from the first lab.

```python
@app.route('/chatbot', methods=['POST'])
def handle_prompt():
    # Read prompt from HTTP request body
    data = request.get_data(as_text=True)
    data = json.loads(data)
    input_text = data['prompt']

    # Create conversation history string
    history = "\n".join(conversation_history)

    # Tokenize the input text and history
    inputs = tokenizer.encode_plus(history, input_text, return_tensors="pt")

    # Generate the response from the model
    outputs = model.generate(**inputs, max_length=60)  # max_length may cause the model to crash at some point as the history grows

    # Decode the response
    response = tokenizer.decode(outputs[0], skip_special_tokens=True).strip()

    # Add interaction to conversation history
    conversation_history.append(input_text)
    conversation_history.append(response)

    return response
```
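To see how the conversation history string is built up, here is the join step in isolation, using a made-up two-turn history:

```python
# A made-up history after one exchange: user prompt, then bot reply
conversation_history = [
    "Hello, how are you today?",
    "I am doing well, thank you!",
]

# Each turn goes on its own line, just as in handle_prompt()
history = "\n".join(conversation_history)
print(history)
```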

The only new thing you've done so far is read the prompt from the HTTP request body. Everything else is copied from your previous chatbot implementation!

Perfect, now before testing your application, here's what the final version of your code looks like:

```python
from flask import Flask, request, render_template
from flask_cors import CORS
import json
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

app = Flask(__name__)
CORS(app)

model_name = "facebook/blenderbot-400M-distill"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
conversation_history = []

@app.route('/chatbot', methods=['POST'])
def handle_prompt():
    data = request.get_data(as_text=True)
    data = json.loads(data)
    input_text = data['prompt']

    # Create conversation history string
    history = "\n".join(conversation_history)

    # Tokenize the input text and history
    inputs = tokenizer.encode_plus(history, input_text, return_tensors="pt")

    # Generate the response from the model
    outputs = model.generate(**inputs, max_length=60)  # max_length may cause the model to crash at some point as the history grows

    # Decode the response
    response = tokenizer.decode(outputs[0], skip_special_tokens=True).strip()

    # Add interaction to conversation history
    conversation_history.append(input_text)
    conversation_history.append(response)

    return response

if __name__ == '__main__':
    app.run()
```

Now let's test your implementation by using curl to make a POST request to <HOST>/chatbot with the following request body: {"prompt": "Hello, how are you today?"}.

Open a new terminal: select Terminal –> New Terminal.

```shell
curl -X POST -H "Content-Type: application/json" -d '{"prompt": "Hello, how are you today?"}' 127.0.0.1:5000/chatbot
```

Here's an example of the response you might see (the chatbot's exact wording will vary):

```
I am doing very well today as well. I am glad to hear you are doing well.
```

If you got a similar response, then congratulations! You have successfully created a Flask backend server with an integrated chatbot!
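If you prefer to stay in Python, the same request can be sent with the standard library's urllib instead of curl. This is a sketch: it assumes the Flask server above is running on port 5000, and prints an error message otherwise:

```python
import json
from urllib import request, error

# Build the same JSON body the curl command sends
payload = json.dumps({"prompt": "Hello, how are you today?"}).encode("utf-8")

req = request.Request(
    "http://127.0.0.1:5000/chatbot",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

try:
    with request.urlopen(req) as resp:
        print(resp.read().decode("utf-8"))  # the chatbot's reply
except error.URLError as exc:
    print(f"Could not reach the server: {exc}")
```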

After you finish, press Ctrl + C in the terminal to stop the server.